30 research outputs found
Average Case Analysis of the Classical Algorithm for Markov Decision Processes with B\"uchi Objectives
We consider Markov decision processes (MDPs) with \omega-regular
specifications given as parity objectives. We consider the problem of computing
the set of almost-sure winning vertices from where the objective can be ensured
with probability 1. The algorithms for the computation of the almost-sure
winning set for parity objectives iteratively use the solutions for the
almost-sure winning set for B\"uchi objectives (a special case of parity
objectives). We study for the first time the average case complexity of the
classical algorithm for computing almost-sure winning vertices for MDPs with
B\"uchi objectives. Our contributions are as follows: First, we show that for
MDPs with constant out-degree the expected number of iterations is at most
logarithmic and the average case running time is linear (as compared to the
worst case linear number of iterations and quadratic time complexity). Second,
we show that for general MDPs the expected number of iterations is constant and
the average case running time is linear (again as compared to the worst case
linear number of iterations and quadratic time complexity). Finally, we also
show that, when all graphs are equally likely, the probability that the
classical algorithm requires more than a constant number of iterations is
exponentially small.
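To make the iterate-and-remove structure of the classical algorithm concrete, the following is a minimal Python sketch, with the MDP reduced to player states (which choose a successor) and probabilistic states (which move to every successor with positive probability); the function name, the set-based representation, and the simplifications are all illustrative, not the paper's implementation.

```python
def almost_sure_buchi(player, random_states, succ, buchi):
    """Hedged sketch of the classical algorithm: repeatedly remove
    states that cannot reach the Buchi set, together with the random
    attractor of those states, until a fixpoint is reached."""
    live = set(player) | set(random_states)
    while True:
        # States that reach the Buchi set with positive probability
        # (backward closure, restricted to the live sub-MDP).
        reach = set(buchi) & live
        changed = True
        while changed:
            changed = False
            for s in live - reach:
                if succ[s] & reach:   # some successor already reaches
                    reach.add(s)
                    changed = True
        losing = live - reach
        if not losing:
            return live               # fixpoint: almost-sure winning set
        # Random attractor of the losing states: states from which the
        # player cannot avoid `losing` with probability 1.
        attr = set(losing)
        changed = True
        while changed:
            changed = False
            for s in live - attr:
                succs = succ[s] & live
                if s in random_states and succs & attr:
                    attr.add(s)       # positive probability of falling in
                    changed = True
                elif s in player and succs and succs <= attr:
                    attr.add(s)       # every choice leads into the attractor
                    changed = True
        live -= attr
```

Each outer iteration removes at least one state, which is the source of the worst-case linear iteration bound discussed above; the average-case results say that far fewer iterations are typically needed.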
Interactive Data Exploration with Smart Drill-Down
We present {\em smart drill-down}, an operator for interactively exploring a
relational table to discover and summarize "interesting" groups of tuples. Each
group of tuples is described by a {\em rule}. For instance, a rule may tell us
that there are a thousand tuples with one particular value in the first column
and another in the second column (and any value in the third column).
Smart drill-down presents an analyst with a list of rules that together
describe interesting aspects of the table. The analyst can tailor the
definition of interesting, and can interactively apply smart drill-down on an
existing rule to explore that part of the table. We demonstrate that the
underlying optimization problems are {\sc NP-Hard}, and describe an algorithm
for finding the approximately optimal list of rules to display when the user
uses a smart drill-down, and a dynamic sampling scheme for efficiently
interacting with large tables. Finally, we perform experiments with our
experimental prototype on real datasets to demonstrate the usefulness of smart
drill-down and to study the performance of our algorithms.
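As a rough illustration of the rule idea only (not the paper's objective function or its approximation algorithm), the sketch below greedily picks up to k rules over a tiny table, scoring each candidate rule by how many not-yet-covered tuples it matches, weighted by how specific it is; the wildcard symbol and the scoring are assumptions made for the example.

```python
from itertools import product

STAR = "*"  # illustrative wildcard: "any value in this column"

def matches(rule, row):
    """A rule matches a row if every non-wildcard entry agrees."""
    return all(r == STAR or r == v for r, v in zip(rule, row))

def smart_drill_down_sketch(table, k=3):
    """Greedy sketch: repeatedly pick the rule with the best marginal
    score, where score = (newly covered tuples) x (non-wildcard count).
    Candidate enumeration is exponential in the number of columns, so
    this is only viable for toy tables."""
    ncols = len(table[0])
    cols = [sorted({row[i] for row in table} | {STAR}) for i in range(ncols)]
    candidates = [r for r in product(*cols) if any(x != STAR for x in r)]
    covered, chosen = set(), []
    for _ in range(k):
        best, best_score = None, 0
        for rule in candidates:
            new = {i for i, row in enumerate(table)
                   if i not in covered and matches(rule, row)}
            score = len(new) * sum(x != STAR for x in rule)
            if score > best_score:
                best, best_score = rule, score
        if best is None:
            break
        chosen.append(best)
        covered |= {i for i, row in enumerate(table) if matches(best, row)}
    return chosen
```

On a four-row, two-column table such as `[("a","x"), ("a","x"), ("a","y"), ("b","y")]`, the greedy pass first picks the specific rule ("a","x") and then a wildcard rule covering the remaining rows, mirroring how a displayed rule list mixes specific and general rules.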
Symbolic Algorithms for Qualitative Analysis of Markov Decision Processes with B\"uchi Objectives
We consider Markov decision processes (MDPs) with \omega-regular
specifications given as parity objectives. We consider the problem of computing
the set of almost-sure winning states from where the objective can be ensured
with probability 1. The algorithms for the computation of the almost-sure
winning set for parity objectives iteratively use the solutions for the
almost-sure winning set for B\"uchi objectives (a special case of parity
objectives). Our contributions are as follows: First, we present the first
subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs
with B\"uchi objectives; our algorithm takes O(n \sqrt{m}) symbolic steps as
compared to the previously known algorithm that takes O(n^2) symbolic steps,
where n is the number of states and m is the number of edges of the MDP. In
practice MDPs have constant out-degree, and then our symbolic algorithm takes
O(n \sqrt{n}) symbolic steps, as compared to the O(n^2) symbolic steps of the
previously known algorithm. Second, we present a new algorithm, the win-lose
algorithm, with the following two properties: (a) the algorithm iteratively
computes subsets of the almost-sure winning set and its complement, as compared
to all previous algorithms that discover the almost-sure winning set upon
termination; and (b) it requires O(n \sqrt{K}) symbolic steps, where K is the
maximal number of edges of strongly connected components (scc's) of the MDP.
The win-lose algorithm requires symbolic computation of scc's. Third, we
improve the algorithm for symbolic scc computation; the previously known
algorithm takes a linear number of symbolic steps, and our new algorithm
improves the constants associated with the linear number of steps. In the worst
case the previously known algorithm takes 5n symbolic steps, whereas our new
algorithm takes 4n symbolic steps.
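The "symbolic steps" cost model can be illustrated with a toy backward-reachability fixpoint in which plain Python sets stand in for BDDs and each predecessor (Pre) image counts as one step; this sketches only the cost model, not the subquadratic algorithm itself, and all names are illustrative.

```python
def symbolic_reach(pre, target):
    """Backward reachability as a symbolic fixpoint: apply the one-step
    predecessor operation `pre` until nothing new is added, counting
    one symbolic step per image computation."""
    reach, steps = set(target), 0
    while True:
        new = reach | pre(reach)
        steps += 1
        if new == reach:
            return reach, steps
        reach = new

# Toy transition relation; in a real symbolic implementation both the
# state sets and the Pre operation would be BDD operations.
edges = {(1, 2), (2, 3), (3, 3)}

def pre(s):
    """One-step predecessors of the set s under `edges`."""
    return {u for (u, v) in edges if v in s}
```

The symbolic-step counts quoted in the abstract (e.g. 5n versus 4n for scc computation) count exactly such image operations, independent of how large the sets they manipulate are.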
Comprehensive and Reliable Crowd Assessment Algorithms
Evaluating workers is a critical aspect of any crowdsourcing system. In this
paper, we devise techniques for evaluating workers by finding confidence
intervals on their error rates. Unlike prior work, we focus on
"conciseness"---that is, giving as tight a confidence interval as possible.
Conciseness is of utmost importance because it allows us to be sure that we
have the best guarantee possible on worker error rate. Also unlike prior work,
we provide techniques that work under very general scenarios, such as when not
all workers have attempted every task (a fairly common scenario in practice),
when tasks have non-boolean responses, and when workers have different biases
for positive and negative tasks. We demonstrate conciseness as well as accuracy
of our confidence intervals by testing them on a variety of conditions and
multiple real-world datasets.
Comment: ICDE 201
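As one standard way to obtain a confidence interval on an error rate (not necessarily the construction developed in the paper), a Wilson score interval for a worker who answered `errors` out of `n` attempted tasks incorrectly can be computed as follows; the function name and defaults are assumptions for the example.

```python
import math

def wilson_interval(errors, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion,
    here a worker's error rate; z = 1.96 gives roughly 95% coverage.
    Handles n = 0 (worker attempted no tasks) by returning the
    uninformative interval [0, 1]."""
    if n == 0:
        return (0.0, 1.0)
    p = errors / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, center - half), min(1.0, center + half))
```

Note how the interval tightens as n grows: this is the "conciseness" axis the abstract emphasizes, since a worker with 10 errors out of 100 tasks gets a much tighter interval than one with 1 error out of 10, even though both point estimates are 0.1.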